Improved Algorithms for Neural Active Learning

Neural Information Processing Systems

We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting. In particular, we introduce two regret metrics, defined via the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work. The proposed algorithm leverages the powerful representation of NNs for both exploitation and exploration, has a query decision-maker tailored to $k$-class classification problems with a performance guarantee, utilizes full feedback, and updates parameters in a more practical and efficient manner. These careful designs lead to an instance-dependent regret upper bound, roughly improving by a multiplicative factor of $O(\log T)$ and removing the curse of input dimensionality. Furthermore, we show that the algorithm can achieve the same performance as the Bayes-optimal classifier in the long run under the hard-margin setting for classification. Finally, we evaluate the proposed algorithm and SOTA baselines in extensive experiments, demonstrating the improved empirical performance.
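The streaming query model described above can be illustrated with a small sketch. This is not the paper's algorithm: a linear softmax model stands in for the neural network, and the margin-based query rule, threshold, and update rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy streaming k-class active learning loop. A linear softmax model
# stands in for the NN; the margin threshold, learning rate, and query
# rule are illustrative assumptions, not the paper's algorithm.
k, d, T = 3, 5, 2000
W_true = rng.normal(size=(k, d))          # hypothetical ground-truth model
W = np.zeros((k, d))
lr, margin_threshold = 0.1, 0.2
queries, errors = 0, 0

for t in range(T):
    x = rng.normal(size=d)
    y = int(np.argmax(W_true @ x + 0.1 * rng.normal(size=k)))  # noisy label
    logits = W @ x
    p = np.exp(logits - logits.max()); p /= p.sum()
    pred = int(np.argmax(p))
    errors += int(pred != y)
    top2 = np.sort(p)[-2:]
    if top2[1] - top2[0] < margin_threshold:   # uncertain: query the label
        queries += 1
        grad = p.copy(); grad[y] -= 1.0        # softmax cross-entropy gradient
        W -= lr * np.outer(grad, x)

print(f"queried {queries}/{T} labels, error rate {errors / T:.3f}")
```

The point of the sketch is the streaming structure: only uncertain rounds trigger a label query and a parameter update, so label cost stays well below $T$.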


Improved Algorithms for Convex-Concave Minimax Optimization

Neural Information Processing Systems

Our bound achieves a linear convergence rate and tighter dependence on condition numbers, especially when $L_{xy} \ll L$ (i.e., the weak-interaction regime). Via a simple reduction, our new bound also implies improved bounds for strongly convex-concave problems and convex-concave problems.
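The weak-interaction regime can be illustrated with a classic extragradient method on a strongly-convex-strongly-concave quadratic (a sketch of the problem class, not the paper's algorithm; the step size, strong convexity parameter, and coupling scale are assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Extragradient on f(x, y) = (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2,
# whose unique saddle point is (0, 0). The coupling matrix A controls
# L_xy; a small A mimics the weak-interaction regime L_xy << L.
# Step size eta and mu are illustrative choices.
d, mu, eta, steps = 4, 1.0, 0.2, 200
A = 0.1 * rng.normal(size=(d, d))        # small coupling: L_xy << L
x, y = rng.normal(size=d), rng.normal(size=d)

def grads(x, y):
    return mu * x + A @ y, A.T @ x - mu * y

for _ in range(steps):
    gx, gy = grads(x, y)
    xh, yh = x - eta * gx, y + eta * gy  # extrapolation step
    gx, gy = grads(xh, yh)
    x, y = x - eta * gx, y + eta * gy    # update step

print(f"distance to saddle: {np.linalg.norm(np.r_[x, y]):.2e}")
```

The iterates contract geometrically toward the saddle point, which is the linear convergence behavior the bound above refers to.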


Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds

Neural Information Processing Systems

We consider the problem of nonnegative submodular maximization in the online setting. At time step $t$, an algorithm selects a set $S_t$. The goal is to design an efficient algorithm that minimizes the expected approximate regret. In this work, we give a general approach for improving regret bounds in online submodular maximization by exploiting "first-order" regret bounds for online linear optimization.
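The notion of expected approximate ($\alpha$-)regret can be made concrete with a toy instance (an illustration of the regret definition, not the paper's algorithm): at each step the learner picks a size-$m$ set, then a monotone submodular function $f_t$ is revealed, and regret is measured against $\alpha$ times the best fixed set in hindsight.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# alpha-approximate regret in online submodular maximization (toy).
# Each f_t(S) = sqrt(sum of w_t over S) is monotone submodular
# (a concave function of a modular function). The learner plays
# follow-the-leader with a simple greedy oracle: the top-m elements
# by cumulative weight. All parameters are illustrative.
n, m, T, alpha = 8, 3, 500, 1.0
W = rng.random((T, n))

cum = np.zeros(n)
learner_value = 0.0
for t in range(T):
    S_t = np.argsort(cum)[-m:]           # greedy set from past weights
    learner_value += np.sqrt(W[t, S_t].sum())
    cum += W[t]

# Best fixed size-m set in hindsight (brute force over C(8,3) = 56 sets).
opt_value = max(np.sqrt(W[:, list(S)].sum(axis=1)).sum()
                for S in itertools.combinations(range(n), m))
regret = alpha * opt_value - learner_value
print(f"alpha-regret after T={T}: {regret:.2f}")
```

Here the regret stays small relative to the optimum's total value, i.e., sublinear in $T$ for this i.i.d. instance.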


Improved Algorithms for Collaborative PAC Learning

Neural Information Processing Systems

We study a recent model of collaborative PAC learning where $k$ players with $k$ different tasks collaborate to learn a single classifier that works for all tasks. Previous work showed that when there is a classifier that has very small error on all tasks, there is a collaborative algorithm that finds a single classifier for all tasks and has $O((\ln (k))^2)$ times the worst-case sample complexity for learning a single task.



Improved Algorithms for Contextual Dynamic Pricing

Neural Information Processing Systems

In contextual dynamic pricing, a seller sequentially prices goods based on contextual information. Buyers will purchase products only if the prices are below their valuations. The goal of the seller is to design a pricing strategy that collects as much revenue as possible. We focus on two different valuation models. The first assumes that valuations depend linearly on the context and are further distorted by noise. Under minor regularity assumptions, our algorithm achieves an optimal regret bound of $\tilde{\mathcal{O}}(T^{2/3})$, improving on existing results.
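The linear valuation model can be sketched with a small simulator (an illustrative setup, not the paper's algorithm): the valuation is linear in the context plus Gaussian noise, and a sale occurs iff the posted price is at most the valuation. The offset, noise level, price grid, and the clairvoyant/naive comparison are all assumptions for the demo.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Simulator for the linear valuation model in contextual dynamic pricing:
# v_t = 1 + <theta, x_t> + eps_t with Gaussian noise eps_t; the buyer
# purchases iff the posted price p_t is at most v_t. Parameters are
# illustrative assumptions.
d, T, sigma = 5, 2000, 0.5
theta = rng.normal(size=d); theta /= np.linalg.norm(theta)
grid = np.linspace(0.0, 3.0, 151)        # candidate prices

def sale_prob(p, mean):
    # P(v >= p) for v ~ N(mean, sigma^2)
    return 0.5 * math.erfc((p - mean) / (sigma * math.sqrt(2.0)))

clairvoyant_rev = naive_rev = 0.0
for _ in range(T):
    x = rng.normal(size=d); x /= np.linalg.norm(x)
    mean = 1.0 + float(theta @ x)        # expected valuation
    v = mean + sigma * rng.normal()      # realized valuation
    p_star = max(grid, key=lambda p: p * sale_prob(p, mean))
    clairvoyant_rev += p_star * (p_star <= v)
    naive_rev += mean * (mean <= v)      # naive: post the mean valuation

print(f"clairvoyant revenue {clairvoyant_rev:.1f} vs naive {naive_rev:.1f}")
```

The clairvoyant price trades off sale probability against price, which is exactly the per-round objective a pricing strategy must learn when $\theta$ and the noise distribution are unknown.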


Review for NeurIPS paper: Improved Algorithms for Convex-Concave Minimax Optimization

Neural Information Processing Systems

Relation to Prior Work: In addition to the papers mentioned above, the relation to the literature on monotone variational inequalities (VIs) should be discussed in detail. Convex-concave min-max problems fall into the category of monotone VIs, and there are many works in the literature addressing them. For example, the following should be discussed: the work of Ronald E. Bruck Jr.; "Dual extrapolation and its applications to solving variational inequalities and related problems"; and "Solving variational inequalities with monotone operators on domains given by linear minimization oracles".



Review for NeurIPS paper: Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds

Neural Information Processing Systems

Weaknesses: This paper has a few weaknesses, which are as follows: - Although the authors have cited [9] and [10] as previous literature on online submodular maximization, they have not compared their regret bounds with those of [9] and [10]. These two papers consider the online continuous submodular maximization problem (as opposed to the submodular set function maximization in this work) and may seem different at first glance; however, since the multilinear extension of a submodular set function is continuous submodular, a combination of the continuous algorithms in [9] and [10] with a lossless rounding technique such as pipage rounding could be used for discrete problems. In particular, the algorithms of [9] and [10] are almost identical to Algorithm 1 of this paper, and the only novelty of Algorithm 1 is the use of g_{f-l} (as opposed to f) as the utility function, which leads to \alpha-regret bounds with a curvature-dependent approximation ratio \alpha. I noticed that this issue has been addressed in more detail in the appendix; however, I think it is important to discuss the computational complexity of the discretized version of Algorithm 1 in the paper as well. In particular, implementing this algorithm on a real-world problem and comparing its performance for different discretization levels and various numbers of samples could have made the paper even better.